Search results for "Frequentist inference"

Showing 10 of 26 documents

Retract p < 0.005 and propose using JASP, instead

2018

Seeking to address the lack of research reproducibility in science, including psychology and the life sciences, a pragmatic solution has been raised recently: to use a stricter p < 0.005 threshold for statistical significance when claiming evidence of new discoveries. Notwithstanding its potential impact, the proposal has motivated many authors to dispute it from different philosophical and methodological angles. This article reflects on the original argument and the ensuing counterarguments, and concludes with a simpler and better-suited alternative that the authors of the proposal knew about and, perhaps, should have made from their Jeffreysian perspective: to use a Bayes …
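The Bayes-factor alternative the abstract points toward can be sketched with the BIC approximation BF01 ≈ exp((BIC1 − BIC0)/2) for a two-group comparison; this is only one rough route to a Bayes factor (the function name and setup below are illustrative, not the article's method):

```python
import numpy as np

def bic_bayes_factor_ttest(x, y):
    """Approximate Bayes factor BF01 (null over alternative) for a
    two-group mean comparison, via BF01 ~= exp((BIC1 - BIC0) / 2).
    Null model: one common mean; alternative: two group means."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x) + len(y)
    pooled = np.concatenate([x, y])
    rss0 = np.sum((pooled - pooled.mean()) ** 2)            # 1 mean parameter
    rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)  # 2 means
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
    bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
    return np.exp((bic1 - bic0) / 2)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 50)
y = rng.normal(0.0, 1.0, 50)   # no true group difference
# BF01 typically exceeds 1 here, i.e. the data favour the null.
print(bic_bayes_factor_ttest(x, y))
```

Unlike a p-value, BF01 can quantify evidence *for* the null, which is the core of the "use a Bayes factor" proposal.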

Keywords: data sharing; open science; research evidence; statistical significance; practical significance; p-values; Bayes factors; prior probability; frequentist inference; reproducibility; replicability; epistemology. Journal: F1000Research

Rejection odds and rejection ratios: A proposal for statistical practice in testing hypotheses

2016

Much of science is (rightly or wrongly) driven by hypothesis testing. Even in situations where the hypothesis testing paradigm is correct, the common practice of basing inferences solely on p-values has been under intense criticism for over 50 years. We propose, as an alternative, the use of the odds of a correct rejection of the null hypothesis to that of an incorrect rejection. Both pre-experimental versions (involving the power and Type I error) and post-experimental versions (depending on the actual data) are considered. Implementations are provided that range from depending only on the p-value to consideration of full Bayesian analysis. A surprise is that all implementations -- even the full Baye…
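The pre-experimental version mentioned in the abstract reduces to simple arithmetic: the odds that a rejection is correct rather than incorrect are (power / Type I error) times the prior odds of the alternative. A minimal sketch (function name is ours):

```python
def pre_experimental_rejection_odds(power, alpha, prior_odds=1.0):
    """Pre-experimental odds that a rejection of the null is correct
    rather than incorrect: (power / alpha) * prior odds of H1 vs H0."""
    return (power / alpha) * prior_odds

# With 80% power at alpha = 0.05 and even prior odds, a rejection
# is 16 times more likely to be a correct one than a false positive.
print(pre_experimental_rejection_odds(0.8, 0.05))  # -> 16.0
```

Low power or long prior odds against the alternative shrink this ratio quickly, which is why power matters for the credibility of rejections.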

Keywords: statistical hypothesis testing; Bayes' rule; Bayes factors; Bayesian probability; odds; statistical power; Type I and type II errors; null hypothesis; p-value; frequentist inference; econometrics; general psychology. Journal: Journal of Mathematical Psychology

Bayesian Methodology in Statistics

2009

Bayesian methods provide a complete paradigm for statistical inference under uncertainty. These may be derived from an axiomatic system and provide a coherent methodology which makes it possible to incorporate relevant initial information, and which solves many of the difficulties that frequentist methods are known to face. If no prior information is to be assumed, as is the more common situation in scientific reporting, a formal initial prior function, the reference prior, mathematically derived from the assumed model, is used; this leads to objective Bayesian methods, objective in the precise sense that their results, like frequentist results, only depend on the assumed model and the data…
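For the simplest case of the idea above, a normal mean with known sigma, the reference prior is the improper uniform prior, and the resulting objective-Bayes credible interval coincides numerically with the frequentist confidence interval (a minimal sketch; the function name is ours):

```python
import numpy as np
from scipy import stats

def reference_posterior_mean_interval(x, sigma, level=0.95):
    """Normal mean, known sigma, uniform reference prior:
    the posterior is N(mean(x), sigma^2 / n), so the central
    credible interval equals the usual confidence interval."""
    n = len(x)
    xbar = np.mean(x)
    z = stats.norm.ppf(0.5 + level / 2)      # e.g. 1.96 for 95%
    half = z * sigma / np.sqrt(n)
    return xbar - half, xbar + half

print(reference_posterior_mean_interval([1.0, 2.0, 3.0, 4.0], sigma=1.0))
```

This numerical agreement in the simplest models is exactly the "objective" sense the abstract describes: results depend only on the assumed model and the data.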

Keywords: Bayesian statistics; Bayes' theorem; Bayesian inference; Bayesian hierarchical modeling; Bayesian linear regression; Bayes factor; prior probability; frequentist inference; statistics

Estimation and visualization of confusability matrices from adaptive measurement data

2010

Abstract We present a simple but effective method based on Luce’s choice axiom [Luce, R.D. (1959). Individual choice behavior: A theoretical analysis. New York: John Wiley & Sons] for consistent estimation of the pairwise confusabilities of items in a multiple-choice recognition task with arbitrarily chosen choice-sets. The method combines the exact (non-asymptotic) Bayesian way of assessing uncertainty with the unbiasedness emphasized in the classical frequentist approach. We apply the method to data collected using an adaptive computer game designed for prevention of reading disability. A player’s estimated confusability of phonemes (or more accurately, phoneme–grapheme connections) and l…
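Luce's choice axiom, which underlies the method, says the probability of choosing item i from a choice-set S is its strength divided by the total strength of S; confusability ratios are then invariant to which choice-set was offered. A minimal sketch (strength values are hypothetical):

```python
import numpy as np

def luce_choice_probs(strengths, choice_set):
    """Luce's choice axiom: P(i | S) = v_i / sum_{j in S} v_j,
    for item strengths v given as a dict and an arbitrary choice-set."""
    v = np.array([strengths[i] for i in choice_set], dtype=float)
    return dict(zip(choice_set, v / v.sum()))

# Hypothetical confusability strengths for three response items.
v = {"a": 2.0, "b": 1.0, "c": 1.0}
print(luce_choice_probs(v, ["a", "b"]))      # "a" chosen twice as often as "b"
print(luce_choice_probs(v, ["a", "b", "c"])) # same 2:1 ratio of a to b
```

That ratio invariance is what lets the paper pool data across arbitrarily chosen choice-sets into one consistent estimate of pairwise confusabilities.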

Keywords: Bayesian probability; Bayesian statistics; frequentist inference; confusion matrix; pairwise comparison; choice axiom; visualization; computer games; machine learning; artificial intelligence; applied mathematics. Journal: Journal of Mathematical Psychology

Inference for Lorenz curve orderings

1999

In this paper we consider the issue of performing statistical inference for Lorenz curve orderings. This involves testing for an ordered relationship in a multivariate context and making comparisons among more than two population distributions. Our approach is to frame the hypotheses of interest as sets of linear inequality constraints on the vector of Lorenz curve ordinates, and apply order-restricted statistical inference to derive test statistics and their sampling distributions. We go on to relate our results to others that have appeared in the recent literature, and use Monte Carlo analysis to highlight their respective properties and comparative performances. Finally, we discuss in gener…
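The objects being constrained above are empirical Lorenz ordinates: the cumulative income share held by the poorest fraction p of the population. A minimal sketch of the ordinates and a sample (not inferential) dominance check, under our own naming:

```python
import numpy as np

def lorenz_ordinates(incomes, points):
    """Empirical Lorenz curve ordinate at each p in `points`:
    cumulative income share of the poorest floor(p*n) units."""
    x = np.sort(np.asarray(incomes, dtype=float))
    cum = np.cumsum(x) / x.sum()
    n = len(x)
    return np.array([cum[int(np.floor(p * n)) - 1] if p * n >= 1 else 0.0
                     for p in points])

def lorenz_dominates(a, b, points):
    """Sample check of the ordering L_a(p) >= L_b(p) at all grid points
    (a is no more unequal than b); the paper's tests place linear
    inequality constraints like these on the ordinate vector."""
    return bool(np.all(lorenz_ordinates(a, points) >= lorenz_ordinates(b, points)))

pts = [0.25, 0.5, 0.75, 1.0]
print(lorenz_dominates([1, 1, 1, 1], [1, 1, 1, 5], pts))  # equality dominates
```

The inferential problem the paper solves is harder: deciding whether such sample inequalities reflect the population ordering, given sampling variability in the ordinates.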

Keywords: statistical inference; statistical hypothesis testing; frequentist inference; fiducial inference; Lorenz curve; linear inequality; sampling distribution; econometrics; mathematical economics. Journal: The Econometrics Journal

WEIGHTED-AVERAGE LEAST SQUARES (WALS): A SURVEY

2014

Model averaging has become a popular method of estimation, following increasing evidence that model selection and estimation should be treated as one joint procedure. Weighted-average least squares (WALS) is a recent model-average approach, which takes an intermediate position between frequentist and Bayesian methods, allows a credible treatment of ignorance, and is extremely fast to compute. We review the theory of WALS and discuss extensions and applications.
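The general model-averaging idea can be illustrated with a simple BIC-weighted average of OLS fits; note this is only a generic sketch of model averaging, not the WALS algorithm itself, which uses a preliminary transformation of the auxiliary regressors to achieve its speed:

```python
import numpy as np

def bic_model_average(y, X_models):
    """Illustrative model averaging (not WALS): fit OLS for each
    candidate design matrix, then average the fitted values with
    BIC-based weights exp(-BIC/2), normalized to sum to one."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    fits, bics = [], []
    for X in X_models:
        beta, rss = np.linalg.lstsq(X, y, rcond=None)[:2]
        rss = rss[0] if rss.size else np.sum((y - X @ beta) ** 2)
        bics.append(n * np.log(rss / n) + X.shape[1] * np.log(n))
        fits.append(X @ beta)
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))   # stabilized weights
    w /= w.sum()
    return sum(wi * fi for wi, fi in zip(w, fits)), w
```

Averaging over models rather than selecting one is the "joint procedure" the abstract refers to: the weights carry the model uncertainty into the final estimate.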

Keywords: model selection; least squares; weighted arithmetic mean; Bayesian probability; prior probability; frequentist inference; statistics; econometrics. Journal: Journal of Economic Surveys

Sampling properties of the Bayesian posterior mean with an application to WALS estimation

2022

Many statistical and econometric learning methods rely on Bayesian ideas, often applied or reinterpreted in a frequentist setting. Two leading examples are shrinkage estimators and model averaging estimators, such as weighted-average least squares (WALS). In many instances, the accuracy of these learning methods in repeated samples is assessed using the variance of the posterior distribution of the parameters of interest given the data. This may be permissible when the sample size is large because, under the conditions of the Bernstein--von Mises theorem, the posterior variance agrees asymptotically with the frequentist variance. In finite samples, however, things are less clear. In this pa…
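The finite-sample gap the abstract is concerned with already appears in the simplest shrinkage setting: a normal location model with a normal prior, where the posterior mean's frequentist sampling variance and its posterior variance differ by exactly the shrinkage factor. A closed-form sketch under these assumptions (function name is ours):

```python
def shrinkage_variances(n, sigma2, tau2):
    """Normal location model x_i ~ N(theta, sigma2), prior theta ~ N(0, tau2).
    The posterior mean is w * xbar with w = tau2 / (tau2 + sigma2/n).
    Returns (frequentist sampling variance, posterior variance):
    w^2 * sigma2/n versus w * sigma2/n, which differ by the factor w < 1."""
    v = sigma2 / n
    w = tau2 / (tau2 + v)
    return w * w * v, w * v

print(shrinkage_variances(10, 1.0, 1.0))  # frequentist variance < posterior variance
```

As n grows, w tends to 1 and the two variances agree, which is the Bernstein-von Mises phenomenon the abstract invokes; for small n the posterior variance overstates the repeated-sampling variance of the shrinkage estimator.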

Keywords: WALS; least squares; location model; posterior probability; posterior moments and cumulants; double-shrinkage estimators; Bayesian probability; frequentist inference; Monte Carlo method; sampling; estimator; variance; sample size determination. Journal: Journal of Econometrics

Improved Frequentist Prediction Intervals for Autoregressive Models by Simulation

2015

It is well known that the so-called plug-in prediction intervals for autoregressive processes, with Gaussian disturbances, are too narrow, i.e. the coverage probabilities fall below the nominal ones. However, simulation experiments show that the formulas borrowed from ordinary linear regression theory yield one-step prediction intervals whose coverage probabilities are very close to the nominal levels. From a Bayesian point of view the resulting intervals are posterior predictive intervals when uniform priors are assumed for both the autoregressive coefficients and the logarithm of the disturbance variance. This finding opens a path to treating multi-step prediction intervals, which are obtain…
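The one-step construction described above, regression formulas applied to an autoregression, can be sketched for an AR(1) with intercept; this is only our illustration of the standard regression prediction interval, not the paper's multi-step simulation method:

```python
import numpy as np
from scipy import stats

def ar1_plugin_interval(y, level=0.95):
    """One-step prediction interval for an AR(1) with intercept, using
    the ordinary linear-regression formulas: regress y_t on (1, y_{t-1}),
    then apply s^2 * (1 + x0' (X'X)^{-1} x0) at x0 = (1, y_n)."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    z = y[1:]
    beta, rss = np.linalg.lstsq(X, z, rcond=None)[:2]
    n, k = X.shape
    s2 = rss[0] / (n - k)
    x0 = np.array([1.0, y[-1]])
    var = s2 * (1.0 + x0 @ np.linalg.solve(X.T @ X, x0))
    t = stats.t.ppf(0.5 + level / 2, n - k)
    pred = x0 @ beta
    return pred - t * np.sqrt(var), pred + t * np.sqrt(var)
```

The extra (X'X)^{-1} term and the t quantile are what widen the interval relative to the naive plug-in version, giving the near-nominal one-step coverage the abstract reports.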

Keywords: prediction intervals; autoregressive models; STAR model; Gaussian; credible interval; frequentist inference; simulation; statistics; econometrics

Weighted-average least squares estimation of generalized linear models

2018

The weighted-average least squares (WALS) approach, introduced by Magnus et al. (2010) in the context of Gaussian linear models, has been shown to enjoy important advantages over other strictly Bayesian and strictly frequentist model averaging estimators when accounting for problems of uncertainty in the choice of the regressors. In this paper we extend the WALS approach to deal with uncertainty about the specification of the linear predictor in the wider class of generalized linear models (GLMs). We study the large-sample properties of the WALS estimator for GLMs under a local misspecification framework that allows the development of asymptotic model averaging theory. We also investigate t…

Keywords: generalized linear models; WALS; model averaging; least squares; linear predictor; estimator; Monte Carlo; attrition; Bayesian probability; frequentist inference; econometrics; applied mathematics

Bayesian Survival Analysis to Model Plant Resistance and Tolerance to Virus Diseases

2017

Viruses constitute a major threat to large-scale production of crops worldwide, causing substantial economic losses and undermining sustainability. We evaluated a new plant variety for resistance and tolerance to a specific virus through a comparison with other well-known varieties. The study is based on two independent Bayesian accelerated failure time models which assess resistance and tolerance survival times. Plant genotype and virus biotype were included as baseline covariates, and error terms were assumed to follow a modified standard Gumbel distribution. A frequentist approach to these models was also considered in order to compare the results of the study from…
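An accelerated failure time model with Gumbel errors can be sketched as log T = b0 + b1·genotype + sigma·eps with eps following the standard minimum-Gumbel distribution, which makes T Weibull; the coefficients and simulator below are hypothetical, not the paper's fitted model:

```python
import numpy as np

def simulate_aft_gumbel(beta0, beta1, sigma, genotype, rng):
    """Hypothetical AFT sketch: log T = beta0 + beta1*genotype + sigma*eps,
    eps standard minimum-Gumbel (so T is Weibull with shape 1/sigma).
    Draws eps as log(Exp(1)) = log(-log U), U uniform on (0, 1)."""
    genotype = np.asarray(genotype, dtype=float)
    eps = np.log(-np.log(rng.uniform(size=len(genotype))))
    return np.exp(beta0 + beta1 * genotype + sigma * eps)

rng = np.random.default_rng(0)
times = simulate_aft_gumbel(1.0, 0.5, 0.5, np.zeros(10), rng)
print(times)  # ten simulated survival times, all positive
```

In this parametrization a positive genotype coefficient multiplies every survival quantile by exp(beta1), which is the "accelerated" (here, decelerated) failure-time interpretation the models rely on.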

Keywords: Gumbel distribution; accelerated failure time model; survival analysis; Bayesian probability; frequentist inference; covariates; plant breeding; resistance (ecology); biotechnology